Online Speedup Learning for Optimal Planning

Authors

  • Carmel Domshlak
  • Erez Karpas
  • Shaul Markovitch
Abstract

Domain-independent planning is one of the foundational areas in the field of Artificial Intelligence. A description of a planning task consists of an initial world state, a goal, and a set of actions for modifying the world state. The objective is to find a sequence of actions, that is, a plan, that transforms the initial world state into a goal state. In optimal planning, we are interested in finding not just a plan, but one of the cheapest plans. A prominent approach to optimal planning these days is heuristic state-space search, guided by admissible heuristic functions. Numerous admissible heuristics have been developed, each with its own strengths and weaknesses, and it is well known that there is no single “best” heuristic for optimal planning in general. Thus, which heuristic to choose for a given planning task is a difficult question. This difficulty can be avoided by combining several heuristics, but that requires computing numerous heuristic estimates at each state, and the tradeoff between the time spent doing so and the time saved by the combined advantages of the different heuristics might be high. We present a novel method that reduces the cost of combining admissible heuristics for optimal planning, while maintaining its benefits. Using an idealized search space model, we formulate a decision rule for choosing the best heuristic to compute at each state. We then present an active online learning approach for learning a classifier with that decision rule as the target concept, and employ the learned classifier to decide which heuristic to compute at each state. We evaluate this technique empirically, and show that it substantially outperforms the standard method for combining several heuristics via their pointwise maximum.
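
The core idea lends itself to a small illustration. The sketch below (my own, in Python; the toy heuristics and the fixed stand-in "classifier" are assumptions for illustration, not the paper's actual features or learner) contrasts the standard pointwise-maximum combination with per-state selection of a single heuristic:

```python
def h_cheap(state):
    # Stand-in for a fast but weak admissible heuristic.
    return sum(state) // 2

def h_expensive(state):
    # Stand-in for a slow but strong admissible heuristic.
    return sum(state)

def max_combination(state):
    # Standard combination: compute every heuristic and take the pointwise
    # maximum. Admissible, but pays for all evaluations at every state.
    return max(h_cheap(state), h_expensive(state))

def selective_combination(state, should_use_expensive):
    # The approach sketched in the abstract: a classifier, trained online
    # during search, decides which single heuristic to compute at this state.
    return h_expensive(state) if should_use_expensive(state) else h_cheap(state)

# Illustrative stand-in for the learned decision rule.
should_use_expensive = lambda state: len(state) > 3

state = (2, 0, 1, 3)
print(max_combination(state))                              # pays for both heuristics
print(selective_combination(state, should_use_expensive))  # pays for only one
```

In the paper's setting the classifier is trained online during search, with the decision rule derived from the idealized search space model as the target concept; here a fixed predicate merely stands in for it.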

Similar Articles

Two Techniques for Tractable Decision-Theoretic Planning

Two techniques to provide inferential tractability for decision-theoretic planning are discussed. Knowledge compilation is a speedup technique that pre-computes intermediate quantities of a domain model to yield a more efficiently executable model. The basic options for compiling decision-theoretic models are defined, leading to the most compiled model in the form of parameterized, conditionact...

Investigating university students' views on online learning

Online learning is a concept that has received attention due to new technologies in the field of education, but today, due to the sudden spread of the coronavirus, online learning has become common, to the extent that most higher education institutions organize online learning courses. However, for many students, especially new undergraduate students who are used to the traditional learning enviro...

Cost-Based Learning for Planning

Most learning in planners to date has focused on speedup learning. Recently the focus has shifted toward learning to improve plan quality. We introduce a different dimension: learning not just from failed plans, but from inefficient plans. We call this cost-based learning (CBL). CBL can be used both to improve plan quality and to provide speedup learning. We show how cost-based learnin...

Multi-Strategy Learning of Search Control for Partial-Order Planning

Most research in planning and learning has involved linear, state-based planners. This paper presents Scope, a system for learning search-control rules that improve the performance of a partial-order planner. Scope integrates explanation-based and inductive learning techniques to acquire control rules for a partial-order planner. Learned rules are in the form of selection heuristics that help t...

Explanation-Based Learning and Reinforcement Learning: A Unified View

In speedup learning problems where full descriptions of operators are known, both explanation-based learning (EBL) and reinforcement learning (RL) methods can be applied. This paper shows that both methods involve fundamentally the same process of propagating information backward from the goal toward the starting state. Most RL methods perform this propagation on a state-by-state basis, while EBL metho...
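
As a toy illustration of that shared backward-propagation view (my own sketch, not code from either paper), a dynamic-programming sweep pushes cost-to-goal values from the goal back toward the start state, which is the state-by-state propagation the snippet attributes to RL methods:

```python
# Deterministic chain: states 0..4, goal state 4, each action moves one
# step right at unit cost.
GOAL = 4
successors = {s: [s + 1] for s in range(GOAL)}  # the goal has no successors

# Backward propagation: repeatedly back up each state's cost-to-goal
# value from its successors (value-iteration style, state by state).
V = {s: float("inf") for s in range(GOAL + 1)}
V[GOAL] = 0.0
for _ in range(GOAL):  # enough sweeps for convergence on this chain
    for s, succs in successors.items():
        V[s] = min(1.0 + V[t] for t in succs)

print(V)  # {0: 4.0, 1: 3.0, 2: 2.0, 3: 1.0, 4: 0.0}
```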

Journal:
  • J. Artif. Intell. Res.

Volume 44, Issue -

Pages -

Publication date: 2012